MiNT: Multimodal Interaction for Modeling and Model Refactoring

Author

  • Nitesh Narayan
Abstract

The development of software brings together participants from different backgrounds, such as domain experts, analysts, designers, programmers, managers, technical writers, graphic designers, and users. No single participant can understand or control all aspects of the system under development, and thus all participants depend on others to accomplish their work. Moreover, any change in the system or the application domain requires all participants to update their understanding of the system. The importance of continuous involvement of domain experts in the modeling process is well known. However, domain experts are usually not proficient with the modeling tools used by software developers, and as a result their involvement is often limited to the initial requirements elicitation. Researchers have provided substantial evidence that multimodal interfaces can greatly expand the accessibility of interfaces to diverse and nonspecialist users. To address these limitations in the collaboration between application domain experts and modelers, we developed MiNT, an extensible platform for adding new input modalities and configuring multimodal fusion in CASE tools. MiNT is based on the M3 framework, which allows capturing multimodal interaction during the design of new multimodal interfaces. The M3 framework was developed in a bootstrapping process during the development of MiNT. The viability of MiNT was demonstrated in two reference implementations: MiNT Eclipse and MiNT Mobile. MiNT Eclipse uses the MiNT framework to add multimodality to Eclipse-based modeling, while MiNT Mobile provides multimodal modeling and model transformations on mobile devices. We conducted two controlled experiments to study the feasibility and applicability of multimodal interfaces for modeling and model refactoring. The results of the first experiment show that multimodal interfaces employing speech as an input modality improve the efficiency of modelers. Speech additionally allows modelers to verbalize their thoughts and is suitable for collaborative modeling sessions. The results of the second experiment show that a multimodal interface combining touch, speech, and touch gestures is more useful than one employing only touch and speech.
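As an illustration of the kind of extensibility described above, the following Java sketch shows one plausible way a modality plugin and a time-window fusion rule could be wired together. All names here (InputEvent, Modality, FusionEngine, and the fusion rule itself) are hypothetical assumptions for illustration; they are not taken from the MiNT implementation.

    // Hypothetical sketch only: a pluggable modality interface and a simple
    // time-window fusion rule that combines a spoken verb with a touched target.
    // None of these types are taken from the MiNT code base.
    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Consumer;

    /** A recognized event from one modality, e.g. the spoken word "delete" or a tap on a class. */
    record InputEvent(String modality, String token, long timestampMillis) {}

    /** A pluggable input modality such as speech, touch, or a touch gesture recognizer. */
    interface Modality {
        String name();
        void start(Consumer<InputEvent> sink);  // begin recognition, forward events to the sink
        void stop();
    }

    /** Fuses events from several modalities into a single modeling command within a time window. */
    class FusionEngine {
        private final List<InputEvent> window = new ArrayList<>();
        private final long windowMillis;
        private final Consumer<String> commandSink;

        FusionEngine(long windowMillis, Consumer<String> commandSink) {
            this.windowMillis = windowMillis;
            this.commandSink = commandSink;
        }

        synchronized void accept(InputEvent event) {
            long cutoff = event.timestampMillis() - windowMillis;
            window.removeIf(e -> e.timestampMillis() < cutoff);  // drop stale events
            window.add(event);
            InputEvent verb = find("speech");   // e.g. "delete"
            InputEvent target = find("touch");  // e.g. the tapped model element
            if (verb != null && target != null) {
                commandSink.accept(verb.token() + " " + target.token());
                window.clear();
            }
        }

        private InputEvent find(String modality) {
            for (InputEvent e : window) {
                if (e.modality().equals(modality)) return e;
            }
            return null;
        }
    }

With such a setup, saying "delete" while tapping a class could, for example, be fused into a single delete command, which corresponds to the touch-plus-speech combinations evaluated in the experiments.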

Similar resources

Building Multimodal Interfaces Out of Executable, Model-Based Interactors and Mappings

Future interaction will be embedded into smart environments that let the user choose and combine a heterogeneous set of interaction devices and modalities according to his preferences, realizing ubiquitous and multimodal access. We propose a model-based runtime environment (the MINT Framework) that describes multimodal interaction by interactors and multimodal mappings. The interactors are ...
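To make the interactor/mapping idea concrete, here is a minimal Java sketch assuming hypothetical type names (Interactor, MultimodalMapping, MappingRegistry) rather than the framework's actual API.

    // Hypothetical sketch of interactors and multimodal mappings; the names are
    // illustrative assumptions and not the MINT Framework's published API.
    import java.util.List;

    /** An executable, model-based interactor: a unit of UI behaviour reachable via some modality. */
    interface Interactor {
        String id();
        void trigger();  // execute the interactor's behaviour, e.g. select a menu entry
    }

    /** Binds events of one modality (e.g. a gesture name) to a target interactor. */
    record MultimodalMapping(String modality, String eventPattern, Interactor target) {
        boolean matches(String sourceModality, String event) {
            return modality.equals(sourceModality) && event.matches(eventPattern);
        }
    }

    /** Dispatches raw modality events to interactors through the declared mappings. */
    class MappingRegistry {
        private final List<MultimodalMapping> mappings;

        MappingRegistry(List<MultimodalMapping> mappings) {
            this.mappings = mappings;
        }

        void dispatch(String sourceModality, String event) {
            for (MultimodalMapping m : mappings) {
                if (m.matches(sourceModality, event)) m.target().trigger();
            }
        }
    }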

A Data Structure for the Refactoring of Multimodal Knowledge

Knowledge often appears in different shapes and formalisms and is thus available as multimodal knowledge. This heterogeneity poses a challenge for the people involved in today's knowledge engineering tasks. In this paper, we discuss an approach for refactoring multimodal knowledge on the basis of a generic tree-based data structure. We explain how this data structure is created from documents (i...
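As a rough illustration of what a generic tree-based structure for heterogeneous knowledge fragments might look like, the following Java sketch uses assumed names (KnowledgeNode, moveChildTo); it is not the data structure defined in that paper.

    // Minimal, hypothetical sketch of a generic tree node holding heterogeneous
    // ("multimodal") knowledge fragments, with one simple refactoring operation.
    import java.util.ArrayList;
    import java.util.List;

    class KnowledgeNode {
        final String type;      // e.g. "rule", "text", "diagram-element"
        final Object payload;   // the knowledge fragment in its original formalism
        final List<KnowledgeNode> children = new ArrayList<>();

        KnowledgeNode(String type, Object payload) {
            this.type = type;
            this.payload = payload;
        }

        /** Refactoring step: move a child subtree under a new parent, content unchanged. */
        void moveChildTo(KnowledgeNode child, KnowledgeNode newParent) {
            if (children.remove(child)) {
                newParent.children.add(child);
            }
        }
    }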

Modeling different decision strategies in a time tabled multimodal route planning by integrating the quantifier-guided OWA operators, fuzzy AHP weighting method and TOPSIS

The purpose of Multi-modal Multi-criteria Personalized Route Planning (MMPRP) is to provide an optimal route between an origin-destination pair by considering the weights of the effective criteria, in such a way that the route can be a combination of public and private modes of transportation. In this paper, the fuzzy analytical hierarchy process (fuzzy AHP) and the quantifier-guided ordered weighted averaging (...
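For readers unfamiliar with TOPSIS, the sketch below shows the ranking step such a method typically ends with: given a decision matrix of routes against criteria and a weight vector (for example, one obtained from fuzzy AHP), alternatives are ranked by their closeness to the ideal solution. The code assumes benefit-type criteria only and is a generic illustration, not the paper's implementation.

    // Generic TOPSIS closeness computation (assumption: all criteria are benefit-type).
    class Topsis {
        /** Returns the closeness coefficient of each alternative (row); higher is better. */
        static double[] closeness(double[][] matrix, double[] weights) {
            int rows = matrix.length, cols = weights.length;
            double[][] v = new double[rows][cols];
            // Vector-normalize each column and apply the criterion weight.
            for (int j = 0; j < cols; j++) {
                double norm = 0;
                for (int i = 0; i < rows; i++) norm += matrix[i][j] * matrix[i][j];
                norm = Math.sqrt(norm);
                for (int i = 0; i < rows; i++) v[i][j] = weights[j] * matrix[i][j] / norm;
            }
            // Ideal and anti-ideal solutions (column-wise max and min for benefit criteria).
            double[] ideal = new double[cols], antiIdeal = new double[cols];
            for (int j = 0; j < cols; j++) {
                ideal[j] = Double.NEGATIVE_INFINITY;
                antiIdeal[j] = Double.POSITIVE_INFINITY;
                for (int i = 0; i < rows; i++) {
                    ideal[j] = Math.max(ideal[j], v[i][j]);
                    antiIdeal[j] = Math.min(antiIdeal[j], v[i][j]);
                }
            }
            // Closeness = distance to anti-ideal / (distance to ideal + distance to anti-ideal).
            double[] result = new double[rows];
            for (int i = 0; i < rows; i++) {
                double dPlus = 0, dMinus = 0;
                for (int j = 0; j < cols; j++) {
                    dPlus += (v[i][j] - ideal[j]) * (v[i][j] - ideal[j]);
                    dMinus += (v[i][j] - antiIdeal[j]) * (v[i][j] - antiIdeal[j]);
                }
                result[i] = Math.sqrt(dMinus) / (Math.sqrt(dPlus) + Math.sqrt(dMinus));
            }
            return result;
        }
    }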

The Eclipse Annotator: an extensible system for multimodal corpus creation

The Eclipse Annotator is an extensible tool for the creation of multimodal language resources. It is based on the TASX-Annotator, which has been refactored to fit into the plugin-based architecture of the new application.

NASCENT: An automatic protein interaction network generation tool for non-model organisms

A large quantity of reliable protein interaction data is available for model organisms in public repositories (e.g., MINT, DIP, HPRD, INTERACT). Most data correspond to experiments with the proteins of Saccharomyces cerevisiae, Drosophila melanogaster, Homo sapiens, Caenorhabditis elegans, Escherichia coli and Mus musculus. For other important organisms the data availability is poor o...


Journal title:

Volume   Issue

Pages  -

Publication year: 2017